Shadow memory describes a computer science technique in which potentially every byte used by a program during its execution has a corresponding shadow byte or bytes. These shadow bytes are typically invisible to the original program and are used to record information about the corresponding piece of application data. The program is usually kept unaware of the existence of shadow memory by a dynamic binary translator/instrumentor, which, among other things, may translate the original program's memory read and write operations into operations that perform the original access and also update the shadow memory as necessary.
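As an illustration, the following C sketch shows what an instrumented store might look like under a simple direct mapping in which each application byte has one shadow byte at a fixed offset. The offset SHADOW_BASE, the helper names, and the one-byte-per-byte "defined" encoding are assumptions chosen for this example, not the scheme of any particular tool.

    #include <stdint.h>
    #include <string.h>

    /* Assumed direct mapping: one shadow byte per application byte,
     * located at a fixed offset from the application address. */
    #define SHADOW_BASE 0x200000000000ULL

    static inline uint8_t *shadow_of(const void *addr) {
        return (uint8_t *)((uintptr_t)addr + SHADOW_BASE);
    }

    /* A 4-byte store as the instrumentor might rewrite it: perform the
     * original write, then record in shadow memory that those bytes are
     * now initialized (here, 1 means "defined"). */
    static inline void instrumented_store32(uint32_t *addr, uint32_t value) {
        *addr = value;                   /* original store                 */
        memset(shadow_of(addr), 1, 4);   /* matching shadow memory update  */
    }

A read would be instrumented analogously, consulting shadow_of(addr) alongside the original load.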
Implemented naively, this technique incurs both a high slowdown and relatively large RAM requirements. The shadow memory requirements can be reduced by using two-level tables, similar to those used by modern operating systems for virtual memory lookup. For example, initially all first-level page entries (each of which might cover 128 kB) might point to a reserved "invalid" or "uninitialized" page. When the program first reads from or writes to a region covered by such an entry, a new second-level page is allocated and initialized with default values; subsequent reads and writes to that region use the newly created page.
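A minimal sketch of such a lazily populated two-level shadow table is shown below. It assumes a 32-bit address space and 128 kB per first-level entry to match the figure above; the names and sizes are illustrative only, and a NULL pointer stands in for the reserved "invalid" entry.

    #include <stdint.h>
    #include <stdlib.h>

    /* Two-level lookup: a first-level array of pointers, each covering
     * 128 kB (2^17 bytes) of an assumed 32-bit address space. */
    #define CHUNK_BITS  17
    #define CHUNK_SIZE  (1u << CHUNK_BITS)       /* 128 kB per second-level page */
    #define L1_ENTRIES  (1u << (32 - CHUNK_BITS))

    static uint8_t *level1[L1_ENTRIES];          /* NULL = no shadow page yet    */

    /* Return the shadow byte for addr, creating and zero-filling the
     * second-level page on first access ("uninitialized" by default). */
    static uint8_t *shadow_byte(uint32_t addr) {
        uint32_t idx = addr >> CHUNK_BITS;
        if (level1[idx] == NULL) {
            level1[idx] = calloc(1, CHUNK_SIZE);
            if (level1[idx] == NULL)
                abort();                         /* out of shadow memory         */
        }
        return &level1[idx][addr & (CHUNK_SIZE - 1)];
    }

Every shadow access then costs one extra indirection through the first-level table, but second-level pages are only allocated for regions the program actually touches.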
The slowdown is harder to overcome: if 100% correctness is desired, every load and store must somehow result in the shadow memory being updated as well. Furthermore, modern CPUs perform relatively poorly on such data-intensive tasks because of the limited size of their local caches.